Neuroscientist Jared Cooney Horvath's new book and article argue that student performance declined after we gave students laptops, and therefore that the tech broke their brains.
That story skips the real culprits:
high-stakes standardized testing that reshaped public schooling, and
inequitable access to effective models of learning, not to devices.
Students in well-resourced schools are more likely to experience project-based, passion-driven models where technology is used for real-world work. Students in under-resourced and segregated schools are more likely to sit in “drill and kill” environments, whether the drill is on paper or on a screen.
Drill and kill is the problem. The model and the instruction, not the laptop, drive outcomes.
When Testing Took Over, Powerful Models Got Squeezed Out
Beginning with state accountability systems in the 1990s, and locked in nationally when the No Child Left Behind Act (NCLB) took effect in 2002, public schools were bound to annual high-stakes testing in reading and math. Research on that era is clear: it narrowed the curriculum and increased time spent on test preparation, especially in high-poverty schools.
At the same time, computers were entering classrooms. The timing is not an accident. As accountability pressure grew, many districts used technology to deliver more “practice” and test-aligned content.
Models like the Schoolwide Enrichment Model, Montessori, and other talent development approaches depend on flexible time, original investigations, and creative products. Scholars such as Joseph Renzulli warned that high-stakes testing could turn standards into a "new cage," crowding out enrichment and creativity.
Public Montessori schools show this tension clearly. Studies note that accountability demands for state test scores push public Montessori programs to compromise core principles: multi-age classrooms, student-directed exploration, and minimal testing.
So the period people point to as “the time laptops ruined learning” is also the period when high-stakes testing made it difficult or impossible for many public schools to adopt or sustain Schoolwide Enrichment, Montessori, and similar models at scale.
We did not just add laptops. We changed the rules so that deep, interest-driven learning became harder to run in public systems.
SAMR: Why Shallow Tech Use Makes Things Worse
The Substitution, Augmentation, Modification, Redefinition (SAMR) model, created by Ruben Puentedura, is a simple way to think about tech integration.
Substitution: typing instead of handwriting.
Augmentation: adding spellcheck, comments, or formatting.
Modification: redesigning tasks, for example, real-time collaborative writing.
Redefinition: doing tasks that were not realistic before, such as publishing multimedia projects to authentic audiences or collaborating globally.
Most uses of technology in drill-heavy environments never get past substitution, and often are not even good substitution. A bad worksheet becomes a bad online quiz. A shallow test becomes a shallow test with a progress bar.
The Organisation for Economic Co-operation and Development (OECD) says the same thing in its “Students, Computers and Learning” report: technology can amplify great teaching, but great technology cannot compensate for poor pedagogy and unchanged models.
When tech is stuck at the lower levels of SAMR inside a test prep model, results are predictably weak or worse. That is not because laptops melt brains. It is because we are using powerful tools to automate low-quality tasks.
What PISA Actually Shows
Critics often point to the Programme for International Student Assessment (PISA) to argue that computers hurt learning.
The OECD’s own analysis shows something more nuanced:
Students who use computers moderately at school tend to perform slightly better than those who rarely use them.
Students who use computers very frequently tend to do worse, even after controlling for background.
That is an inverted-U curve. It matches what you would expect if:
Limited, purposeful tech use supports learning.
Very intensive, unfocused, or low-level use correlates with worse outcomes.
And crucially, PISA is correlational; it does not show that computers caused the decline, only that heavy, low-quality use tends to show up where scores are already lower.
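To make the shape concrete, here is a minimal sketch in Python of what an inverted-U relationship looks like when you fit a quadratic to it. Every number below is invented for illustration; nothing here is actual OECD data.

```python
# A minimal sketch of the inverted-U pattern using synthetic data.
# The numbers below are illustrative only; they are NOT OECD figures.
import numpy as np

# Hypothetical index of computer use at school (0 = never, 10 = constant)
use = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)

# Hypothetical mean scores: rising with moderate use, falling with heavy use
score = np.array([480, 492, 501, 507, 510, 509, 504, 496, 485, 472, 457])

# Fit a quadratic: an inverted U shows up as a negative squared-term coefficient
coeffs = np.polyfit(use, score, deg=2)  # returns [a, b, c] for a*use^2 + b*use + c
a, b, c = coeffs
peak = -b / (2 * a)  # vertex of the parabola: the "moderate use" sweet spot

print(f"quadratic term a = {a:.2f} (negative => inverted U)")
print(f"estimated peak at use index {peak:.1f} (moderate, not maximal, use)")
```

The point is the shape, not the numbers: a negative quadratic term means the fitted curve peaks at moderate use and bends downward as use gets heavy, which is the pattern the OECD describes.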
The same reports note that heavy spending on devices on its own has not produced big gains in reading, mathematics, or science, and emphasize that how technology is used and whether students can navigate digital texts matter as much as access.
So PISA does not say “laptops broke kids.” It says that bolting devices onto a drill and test model, and especially overusing them, is a losing strategy.
The Data Story Is Skewed by Who Is Being Tested
There is another missing piece. The population of students taking these tests has changed.
The National Center for Education Statistics (NCES) reports that the U.S. “status dropout” rate for 16 to 24-year-olds fell from 7.0 percent in 2012 to 5.3 percent in 2022, with declines across every major racial and ethnic group.
At the same time, the percentage of public school students who are English learners increased from 9.4 percent (4.6 million students) in 2011 to 10.6 percent (5.3 million students) in 2021.
That means:
More students who would previously have dropped out are still in school and being tested.
More non-native English speakers, often in under-resourced schools, are taking tests largely in English.
Layer on persistent poverty and segregation, and it is obvious that average scores reflect who is sitting for the tests, not just what tools they are using.
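A quick arithmetic sketch shows why composition matters. The group labels, shares, and scores below are invented for illustration; the mechanism, a weighted average shifting because the mix of test-takers shifts, is the point.

```python
# A hedged sketch of a composition effect, with invented numbers.
# Each group's average score is held CONSTANT across both years;
# only the mix of who sits for the test changes.

# Hypothetical group means (unchanged between years)
group_means = {"formerly_likely_dropouts": 430, "english_learners": 450, "other": 520}

# Hypothetical shares of the tested population in each year
shares_year1 = {"formerly_likely_dropouts": 0.03, "english_learners": 0.09, "other": 0.88}
shares_year2 = {"formerly_likely_dropouts": 0.06, "english_learners": 0.11, "other": 0.83}

def weighted_avg(shares):
    """Overall average as a share-weighted sum of unchanged group means."""
    return sum(shares[g] * group_means[g] for g in group_means)

print(f"year 1 average: {weighted_avg(shares_year1):.1f}")  # 511.0
print(f"year 2 average: {weighted_avg(shares_year2):.1f}")  # 506.9
```

No group's performance changed between the two hypothetical years, yet the overall average fell by about four points, simply because more historically lower-scoring students stayed in school long enough to be tested.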
If you ignore dropout, language, and disadvantage, and then blame laptops for every score change, you are not doing serious analysis.
Teacher Preparation Has Not Caught Up
Most teachers were never trained to use technology beyond substitution.
Reviews of teacher preparation programs by the International Society for Technology in Education (ISTE) and others show that many new teachers feel underprepared to integrate technology meaningfully into instruction.
As artificial intelligence arrives, early studies suggest schools of education tend to frame AI as a plagiarism threat rather than as a tool for feedback, differentiation, or creative tasks.
So we have:
Devices arriving quickly.
High-stakes tests shaping what “counts.”
Teacher prep that rarely takes educators past substitution on the SAMR ladder.
Under those conditions, most tech use will be shallow, and outcomes will reflect that.
Where EdTech Leaders Should Focus
If you work in or around education technology, the takeaway is not “get rid of laptops.” It is “stop letting weak models and shallow uses define the narrative.”
Key priorities:
Reclaim space from high-stakes testing so models like Schoolwide Enrichment, Montessori, Big Picture, and Agile Learning can exist in public schools, not just in boutique or private settings.
Push technology up the SAMR ladder, toward modification and redefinition, especially for real-world projects, creative production, and authentic audiences.
Demand that schools of education treat technology and AI integration as core pedagogy, not extras.
Read test data through the lens of dropout, language, and poverty, not only through the lens of device counts.
You can absolutely decide when to close laptops and pull out paper, or when to limit phones. That is tactical.
Strategically, the question is: are we building models where technology supports real-world, passion-driven learning for all students, or models where tech is just a faster worksheet?
Laptops did not take away students' brains. High-stakes testing, inequitable school models, shallow tech integration, and weak teacher preparation did. Fix those, and the same devices people blame today become some of the best tools we have for helping students do meaningful, real-world work that actually uses their brains.